Tensor decomposition to compress convolutional layers in deep learning

Authors

Abstract

Feature extraction for tensor data serves as an important step in many tasks such as anomaly detection, process monitoring, image classification, and quality control. Although many methods have been proposed for feature extraction, two challenges still need to be addressed: (i) how to reduce the computational cost for high-dimensional, large-volume data; (ii) how to interpret the output features and evaluate their significance. The most recent methods in deep learning, such as the Convolutional Neural Network, have shown outstanding performance in analyzing tensor data, but their wide adoption is hindered by model complexity and lack of interpretability. To fill this research gap, we propose to use CP-decomposition to approximately compress the convolutional layer (CPAC-Conv layer) in deep learning. The contributions of our work include three aspects: (i) we adapt CP-decomposition to compress convolutional kernels and derive the expressions of forward and backward propagations for the CPAC-Conv layer; (ii) compared with the original convolutional layer, the CPAC-Conv layer can reduce the number of parameters without decaying prediction performance. It can combine with other layers to build novel Deep Convolutional Neural Networks; (iii) the value of decomposed kernels indicates the significance of the corresponding feature map, which provides us with insights to guide feature selection.
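The compression idea in the abstract can be illustrated with a minimal sketch: a 4-way convolutional kernel K[t, s, i, j] (output channel, input channel, kernel height, kernel width) is approximated by a rank-R CP decomposition, i.e., a sum of R outer products of four factor vectors, so the parameter count drops from T·S·d·d to R·(T + S + d + d). The sizes and variable names below are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Hypothetical layer sizes (illustration only, not from the paper):
# T output channels, S input channels, d x d spatial kernel, CP rank R.
T, S, d, R = 64, 32, 3, 8

rng = np.random.default_rng(0)
# Rank-R CP factor matrices, one per mode of the 4-way kernel tensor.
U_t = rng.standard_normal((T, R))
U_s = rng.standard_normal((S, R))
U_i = rng.standard_normal((d, R))
U_j = rng.standard_normal((d, R))

# Reconstruct the full kernel as a sum of R rank-1 terms:
# K[t,s,i,j] = sum_r U_t[t,r] * U_s[s,r] * U_i[i,r] * U_j[j,r]
K = np.einsum('tr,sr,ir,jr->tsij', U_t, U_s, U_i, U_j)

# Parameter counts: full kernel vs. CP factors.
full_params = T * S * d * d          # 64 * 32 * 3 * 3 = 18432
cp_params = R * (T + S + d + d)      # 8 * (64 + 32 + 3 + 3) = 816
print(full_params, cp_params)        # 18432 vs 816, ~22x fewer parameters
```

In practice the factors would be fitted to a trained kernel (e.g., by alternating least squares) rather than drawn at random; the point here is only the storage arithmetic and the tensor shape of the decomposition.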


Similar articles

Learning a Virtual Codec Based on Deep Convolutional Neural Network to Compress Image

Although deep convolutional neural networks have been proven to efficiently eliminate coding artifacts caused by the coarse quantization of traditional codecs, it is difficult to train any neural network in front of the encoder because of the gradient's back-propagation. In this paper, we propose an end-to-end image compression framework based on a convolutional neural network to resolve the problem of non-diff...


Pooling the Convolutional Layers in Deep ConvNets for Action Recognition

Deep ConvNets have shown good performance in image classification tasks. However, deep video representation for action recognition remains a problem. The problem comes from two aspects: on one hand, current video ConvNets are relatively shallow compared with image ConvNets, which limits their capability of capturing complex video action information; on the other hand, tempor...


Tartan: Accelerating Fully-Connected and Convolutional Layers in Deep Learning Networks by Exploiting Numerical Precision Variability

Tartan (TRT), a hardware accelerator for inference with Deep Neural Networks (DNNs), is presented and evaluated on Convolutional Neural Networks. TRT exploits the variable per layer precision requirements of DNNs to deliver execution time that is proportional to the precision p in bits used per layer for convolutional and fully-connected layers. Prior art has demonstrated an accelerator with th...


Convolutional Dictionary Learning through Tensor Factorization

Tensor methods have emerged as a powerful paradigm for consistent learning of many latent variable models such as topic models, independent component analysis and dictionary learning. Model parameters are estimated via CP decomposition of the observed higher order input moments. However, in many domains, additional invariances such as shift invariances exist, enforced via models such as convolu...


Unsupervised Learning of Word-Sequence Representations from Scratch via Convolutional Tensor Decomposition

Unsupervised extraction of text embeddings is crucial for text understanding in machine learning. Word2Vec and its variants have achieved substantial success in mapping words with similar syntactic or semantic meaning to vectors close to each other. However, extracting context-aware word-sequence embeddings remains a challenging task. Training over a large corpus is difficult as labels are difficult ...



Journal

Journal title: IISE Transactions

Year: 2021

ISSN: 2472-5854, 2472-5862

DOI: https://doi.org/10.1080/24725854.2021.1894514